
feat: fat-slices repo-architect execution path via GitHub Actions#7

Merged
Steake merged 4 commits into main from copilot/update-repo-architect-workflow on Mar 13, 2026

Conversation


Copilot AI commented Mar 12, 2026

Description

Adds a repeatable, automated "fat slices" execution path that dispatches repo_architect.py as a sequence of scoped GitHub Actions runs — report and mutate-by-lane slices — with zero manual editing between runs. Rebased on top of main (post-PR #8) to keep all lane scoping, model fallback, campaign mode, and entrypoint consolidation hardening intact.

Workflow — .github/workflows/repo-architect.yml

No changes from main (PR #8). The workflow inputs (mode, lanes, max_slices, github_model, mutation_budget) and the REPO_ARCHITECT_BRANCH_SUFFIX / model-fallback env vars are fully preserved as merged.

Orchestration script — scripts/run_repo_architect_slices.sh

  • Dispatches 6 sequential slices; waits for each via gh run watch --exit-status before proceeding
  • Dispatch interface aligned to the PR #8 workflow (feat: complete repo_architect methodology): dispatches with mode + lanes fields; the obsolete lane, targets, and allow_dirty fields are removed
  • Race-condition-safe, dispatch-bound run-ID resolution: captures DISPATCH_TIME (UTC ISO-8601) immediately before gh workflow run; filters gh run list by --event workflow_dispatch --branch "$BRANCH"; pipes JSON to jq --arg ts "$DISPATCH_TIME" 'select(.createdAt >= $ts)' — pre-existing in-flight runs and runs on other branches (e.g. scheduled triggers) cannot be picked up
  • GH_API_DELAY env var (default 5s) controls post-dispatch polling delay
  • Fail-hard validation: artifact download failures and missing latest_analysis.json terminate the slice with a non-zero exit; mode mismatch is a hard error
  • Downloads artifacts per slice and validates .agent/latest_analysis.json — prints mode, status, architecture_score, changed_files, and pr_url
  • MERGE_BASELINE_PR — synchronous, confirmed merge: explicit gh pr merge --squash (no --auto, no || true); polls gh pr view --json state until state == MERGED before dispatching any slice — guarantees all slices run against the fully merged baseline
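
The dispatch-bound run-ID filter described above can be sketched in isolation. This is a minimal sketch, not the script's exact code: the JSON is simulated `gh run list --json databaseId,createdAt` output, the variable names are illustrative, and `jq` is assumed to be available.

```shell
# Timestamp captured immediately before `gh workflow run` in the real script.
DISPATCH_TIME="2026-03-12T23:00:00Z"

# Simulated run list: one pre-existing in-flight run, one run created after dispatch.
RUNS_JSON='[
  {"databaseId": 100, "createdAt": "2026-03-12T22:59:00Z"},
  {"databaseId": 101, "createdAt": "2026-03-12T23:00:05Z"}
]'

# Keep only runs created at or after DISPATCH_TIME (ISO-8601 compares lexically),
# then take the newest one.
RUN_ID="$(printf '%s' "$RUNS_JSON" \
  | jq -r --arg ts "$DISPATCH_TIME" \
      '[.[] | select(.createdAt >= $ts)] | sort_by(.createdAt) | last | .databaseId')"

echo "$RUN_ID"   # prints 101 — the stale run 100 is excluded
```

Note the `jq` invocation is piped rather than passed through `gh -q`, since `gh -q` does not forward `--arg`.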

Slice sequence:

| Slice | Mode | Lanes |
| --- | --- | --- |
| report | report | (none) |
| parse_errors | mutate | parse_errors |
| import_cycles | mutate | import_cycles |
| entrypoint_consolidation | mutate | entrypoint_consolidation |
| hygiene | mutate | hygiene |
| campaign | campaign | (none) |
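
The sequence in the table could be driven by a loop along these lines. This is illustrative only: the `"name|mode|lanes"` slice encoding and the variable names are invented for the sketch, and the real `gh workflow run` call is left commented out; only the `mode` and `lanes` dispatch fields come from the PR.

```shell
# Hypothetical per-slice dispatch loop mirroring the table above.
slices=(
  "report|report|"
  "parse_errors|mutate|parse_errors"
  "import_cycles|mutate|import_cycles"
  "entrypoint_consolidation|mutate|entrypoint_consolidation"
  "hygiene|mutate|hygiene"
  "campaign|campaign|"
)

for slice in "${slices[@]}"; do
  IFS='|' read -r name mode lanes <<< "$slice"
  args=(-f "mode=$mode")
  if [ -n "$lanes" ]; then          # report/campaign slices dispatch without lanes
    args+=(-f "lanes=$lanes")
  fi
  echo "dispatch $name: ${args[*]}"
  # gh workflow run repo-architect.yml "${args[@]}"   # real dispatch, omitted in sketch
done
```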

Bootstrap script — scripts/bootstrap_repo_architect_slices.sh

Validates that the workflow and runner script are present at their canonical tracked paths and marks the runner executable. Does not embed or overwrite file contents — the committed files are the single source of truth, eliminating any risk of heredoc drift.
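
A validate-and-chmod bootstrap of this shape might look as follows. This is a sketch, not the committed script: it runs against a throwaway temp directory so it is self-contained here; the canonical paths mirror the PR.

```shell
# Sketch of a bootstrap that only validates tracked files and marks the runner
# executable — it never writes file contents, so heredoc drift is impossible.
tmp="$(mktemp -d)"
mkdir -p "$tmp/.github/workflows" "$tmp/scripts"
touch "$tmp/.github/workflows/repo-architect.yml" \
      "$tmp/scripts/run_repo_architect_slices.sh"   # stand-ins for the tracked files

cd "$tmp"
for f in .github/workflows/repo-architect.yml scripts/run_repo_architect_slices.sh; do
  if [ ! -f "$f" ]; then
    echo "ERROR: missing tracked file: $f" >&2
    exit 1
  fi
done
chmod +x scripts/run_repo_architect_slices.sh
echo "bootstrap OK"
```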

Related Issues

Test Evidence

  • bash -n syntax validation passes on both shell scripts
  • YAML validity confirmed via python -c "import yaml; yaml.safe_load(...)"
  • python -c "import ast; ast.parse(...)" confirms no Python syntax errors in repo_architect.py
  • All 57 tests/test_repo_architect.py tests pass against the restored repo_architect.py
  • jq --arg pattern validated: piped invocation correctly filters runs by createdAt >= DISPATCH_TIME
  • CodeQL scan: 0 alerts

Checklist

  • Tests pass locally (pytest tests/)
  • Code is formatted (black . and isort .)
  • Documentation updated (if applicable)
  • No secrets or credentials committed
  • Related issue linked above
Original prompt

You are working inside the BraveNewCapital/GodelOS_StrangeDevLoop repository.

Goal:
Implement a robust "fat slices" repo-architect execution path that can run repeated report and mutation slices through GitHub Actions with minimal manual intervention, using the existing repo_architect.py infrastructure where possible and only adding thin wrapper scripts/workflow logic where necessary.

What to build:

  1. Update or create .github/workflows/repo-architect.yml

    • Keep workflow_dispatch.
    • Support these inputs:
      • mode: analyze | report | mutate
      • lane: optional string
      • targets: optional comma-separated list of files/modules
      • github_model: default openai/gpt-4.1
      • report_path: default docs/repo_architect/runtime_inventory.md
      • mutation_budget: default 1
      • allow_dirty: true|false, default true
    • Keep scheduled execution if already present.
    • Use permissions:
      • contents: write
      • pull-requests: write
      • models: read
    • Configure git identity before running mutations.
    • Export:
      • GITHUB_MODEL
      • REPO_ARCHITECT_LANE
      • REPO_ARCHITECT_TARGETS
    • Invoke repo_architect.py using existing CLI flags when available.
    • Detect optional support for --lane and --targets by inspecting python repo_architect.py --help.
    • Always upload artifacts from:
      • .agent
      • docs/repo_architect
    • Include hidden files in artifact upload.
  2. Add scripts/run_repo_architect_slices.sh

    • This is the main orchestration script.
    • It should:
      • fail hard on errors
      • require gh and python3
      • auto-detect repo via gh repo view
      • optionally merge a baseline PR if MERGE_BASELINE_PR is set
      • dispatch workflow runs using gh workflow run
      • wait for completion with gh run watch --exit-status
      • download the produced artifact
      • inspect .agent/latest_analysis.json
      • assert the expected mode/status
      • print the resulting PR URL, validation summary, architecture score, changed files count
    • It must run these slices in sequence:
      1. report packet
      2. parse fix for tests/nlu_nlg/nlu/test_pipeline.py
      3. parse fix for tests/nlu_nlg/nlu/test_lexical_analyzer_parser.py
      4. import cycle slice for:
        • backend/core/agentic_daemon_system.py
        • backend/core/grounding_coherence_daemon.py
      5. import cycle slice for:
        • godelOS/core_kr/knowledge_store/chroma_store.py
        • godelOS/core_kr/knowledge_store/interface.py
      6. import cycle slice for:
        • godelOS/core_kr/type_system/manager.py
        • godelOS/core_kr/type_system/visitor.py
    • Use lane names that make the slice intent obvious, such as:
      • report_packet
      • parse_repair:test_pipeline
      • parse_repair:test_lexical_analyzer_parser
      • import_cycle:agentic_grounding
      • import_cycle:chroma_interface
      • import_cycle:type_system
  3. Add scripts/bootstrap_repo_architect_slices.sh

    • This should create or overwrite:
      • .github/workflows/repo-architect.yml
      • scripts/run_repo_architect_slices.sh
    • Make the runner script executable.
    • This script is just a bootstrap convenience for local use.

Implementation constraints:

  • Reuse existing repo_architect.py behavior and existing repository conventions.
  • Do not invent a new architecture layer if the existing script already supports the workflow.
  • If repo_architect.py does not support lane or targets as CLI flags, fall back to environment variables only.
  • Keep changes surgical.
  • Do not break the existing report-only path.
  • Do not remove existing useful workflow inputs unless they are redundant.
  • Do not add placeholders or pseudo-code. All shell scripts must be complete and executable.
  • Prefer bash and standard Unix tooling already available on GitHub Ubuntu runners.

Acceptance criteria:

  • Running the bootstrap script writes the workflow and runner scripts successfully.
  • Running ./scripts/run_repo_architect_slices.sh dispatches the workflow multiple times without manual editing.
  • Each run downloads artifacts and validates .agent/latest_analysis.json.
  • The workflow uploads .agent and docs/repo_architect artifacts successfully.
  • The implementation is clean enough to commit directly.

Deliverables:

  • Modified or created .github/workflows/repo-architect.yml
  • New scripts/run_repo_architect_slices.sh
  • New scripts/bootstrap_repo_architect_slices.sh

After making the changes:

  • Show the exact diff summary.
  • Explain any assumptions about current repo_architect.py CLI support.
  • If you had to use environment-variable fallback for lane/targets, say so explicitly.

Custom agent used: Oli
Oliver Chimera Profile



- Update .github/workflows/repo-architect.yml with lane/targets/allow_dirty
  inputs, GITHUB_MODEL/REPO_ARCHITECT_LANE/REPO_ARCHITECT_TARGETS env
  exports, dynamic CLI-flag detection, bash-array quoting, and
  include-hidden-files artifact upload
- Add scripts/run_repo_architect_slices.sh — 6-slice orchestration script
  with race-condition-safe run-ID polling and configurable GH_API_DELAY
- Add scripts/bootstrap_repo_architect_slices.sh — local bootstrap helper

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
@github-actions

github-actions bot commented Mar 12, 2026

🧪 CI — Python 3.10

ERROR tests/test_cognitive_subsystem_activation.py::TestEndToEndFlow::test_context_engine_round_trip - ModuleNotFoundError: No module named 'requests'
ERROR tests/test_cognitive_subsystem_activation.py::TestEndToEndFlow::test_nlg_pipeline_process - ModuleNotFoundError: No module named 'requests'
===== 4 failed, 1072 passed, 81 skipped, 65 warnings, 22 errors in 38.58s =====

@github-actions

github-actions bot commented Mar 12, 2026

🧪 CI — Python 3.11

ERROR tests/test_cognitive_subsystem_activation.py::TestEndToEndFlow::test_context_engine_round_trip - ModuleNotFoundError: No module named 'requests'
ERROR tests/test_cognitive_subsystem_activation.py::TestEndToEndFlow::test_nlg_pipeline_process - ModuleNotFoundError: No module named 'requests'
===== 4 failed, 1072 passed, 81 skipped, 65 warnings, 22 errors in 40.24s =====

@Steake Steake marked this pull request as ready for review March 12, 2026 23:21
@Steake Steake self-requested a review as a code owner March 12, 2026 23:21
Copilot AI review requested due to automatic review settings March 12, 2026 23:21
@Steake

Steake commented Mar 12, 2026

Ok

Copilot AI changed the title from "[WIP] Add robust execution path for fat slices in repo architect" to "feat: fat-slices repo-architect execution path via GitHub Actions" on Mar 12, 2026
Copilot AI requested a review from Steake March 12, 2026 23:23

Copilot AI left a comment


Pull request overview

Adds a “fat slices” execution path for repo_architect.py by extending the GitHub Actions workflow interface and introducing local scripts to dispatch multiple sequential workflow runs and collect/validate their artifacts.

Changes:

  • Extends .github/workflows/repo-architect.yml with lane, targets, and allow_dirty inputs; uploads hidden artifacts; and adds dynamic CLI flag detection for --lane/--targets.
  • Adds scripts/run_repo_architect_slices.sh to dispatch and monitor a fixed sequence of slice runs and download artifacts per run.
  • Adds scripts/bootstrap_repo_architect_slices.sh to (re)write the workflow and runner scripts into canonical repo locations.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 6 comments.

| File | Description |
| --- | --- |
| scripts/run_repo_architect_slices.sh | New orchestrator to dispatch 6 sequential slices, watch runs, download artifacts, and summarize latest_analysis.json. |
| scripts/bootstrap_repo_architect_slices.sh | New bootstrap script that overwrites the workflow + runner script via heredocs and marks the runner executable. |
| .github/workflows/repo-architect.yml | Adds workflow inputs/env for lane/targets, conditional extra args construction, and uploads hidden files in artifacts. |

Comment on lines +11 to +16
for cmd in gh python3 jq; do
  if ! command -v "$cmd" &>/dev/null; then
    echo "ERROR: required command '$cmd' not found." >&2
    exit 1
  fi
done

Copilot AI Mar 12, 2026


python3 is listed as a required dependency but this script doesn’t invoke it (JSON parsing is done with jq). Consider either removing python3 from the dependency check or using it for validation, to avoid failing on systems that otherwise have the needed tooling.

Comment on lines +19 to +25
cat > "$WORKFLOW_DIR/repo-architect.yml" << 'WORKFLOW_EOF'
name: repo-architect

on:
  workflow_dispatch:
    inputs:
      mode:

Copilot AI Mar 12, 2026


This bootstrap script embeds full copies of both the workflow and runner script as heredocs and overwrites the tracked files each time it’s run. That duplication can easily drift from the committed versions (or accidentally wipe local edits). Consider generating these files by copying from the tracked paths (or by keeping a single source of truth) so the bootstrap step can’t get out of sync.

Comment on lines +88 to +113
  GITHUB_MODEL: ${{ github.event.inputs.github_model || 'openai/gpt-4.1' }}
  REPO_ARCHITECT_LANE: ${{ github.event.inputs.lane || '' }}
  REPO_ARCHITECT_TARGETS: ${{ github.event.inputs.targets || '' }}
run: |
  MODE="${{ github.event.inputs.mode }}"
  MODEL="${{ github.event.inputs.github_model }}"
  REPORT_PATH="${{ github.event.inputs.report_path }}"
  MUTATION_BUDGET="${{ github.event.inputs.mutation_budget }}"
  ALLOW_DIRTY="${{ github.event.inputs.allow_dirty }}"
  if [ -z "$MODE" ]; then MODE="report"; fi
  if [ -z "$MODEL" ]; then MODEL="openai/gpt-4.1"; fi
  if [ -z "$REPORT_PATH" ]; then REPORT_PATH="docs/repo_architect/runtime_inventory.md"; fi
  if [ -z "$MUTATION_BUDGET" ]; then MUTATION_BUDGET="1"; fi
  export GITHUB_MODEL="$MODEL"
  python repo_architect.py --allow-dirty --mode "$MODE" --report-path "$REPORT_PATH" --mutation-budget "$MUTATION_BUDGET"
  if [ -z "$ALLOW_DIRTY" ]; then ALLOW_DIRTY="true"; fi

  # Detect optional --lane / --targets CLI support; fall back to env vars if absent
  HELP_TEXT="$(python repo_architect.py --help 2>&1)"
  EXTRA_ARGS=()
  if echo "$HELP_TEXT" | grep -q -- '--lane'; then
    if [ -n "$REPO_ARCHITECT_LANE" ]; then
      EXTRA_ARGS+=(--lane "$REPO_ARCHITECT_LANE")
    fi
  fi
  if echo "$HELP_TEXT" | grep -q -- '--targets'; then
    if [ -n "$REPO_ARCHITECT_TARGETS" ]; then
      EXTRA_ARGS+=(--targets "$REPO_ARCHITECT_TARGETS")
    fi
  fi

Copilot AI Mar 12, 2026


lane/targets inputs are exported/detected here, but repo_architect.py currently does not accept --lane/--targets and also does not read REPO_ARCHITECT_LANE / REPO_ARCHITECT_TARGETS from the environment. As a result, dispatched “slice” runs cannot actually be scoped by lane/targets (these values are effectively ignored). Consider either implementing lane/targets handling in repo_architect.py (CLI flags or env var parsing) or removing these workflow inputs/args until the backend supports them to avoid a misleading interface.

Comment on lines +133 to +149
gh run download "$RUN_ID" \
  --repo "$REPO" \
  --dir "$SLICE_DIR" \
  --pattern "repo-architect-$RUN_ID" 2>/dev/null || \
gh run download "$RUN_ID" \
  --repo "$REPO" \
  --dir "$SLICE_DIR" || true

# Locate the analysis JSON (may be nested under artifact folder)
ANALYSIS_FILE="$(find "$SLICE_DIR" -name "latest_analysis.json" -type f | head -1 || true)"

echo ""
echo "--- Validation for slice: $slice_name ---"

if [ -z "$ANALYSIS_FILE" ]; then
  echo "WARNING: latest_analysis.json not found in downloaded artifacts."
else

Copilot AI Mar 12, 2026


Artifact download failures and missing latest_analysis.json are currently treated as non-fatal (|| true + WARNING). This conflicts with the script’s stated goal of “fail hard on errors” and can silently mark a slice as successful even when no artifacts were produced. Consider failing the slice if artifact download fails or if latest_analysis.json cannot be found/parsed, and assert expected mode/status values instead of only warning on mismatches.

for attempt in $(seq 1 "$max_attempts"); do
  run_id="$(gh run list \
    --repo "$REPO" \
    --workflow "$WORKFLOW_FILE" \

Copilot AI Mar 12, 2026


wait_for_new_run_id selects the newest run for the workflow without filtering by branch/ref (and without verifying inputs). If another run for the same workflow starts on a different branch (e.g., the scheduled trigger), this can pick the wrong run ID even if it’s “newer” than before_id. Consider filtering gh run list by the target branch/ref (and ideally event type) or querying for the run matching the dispatch ref/created timestamp.

Suggested change
-    --workflow "$WORKFLOW_FILE" \
+    --workflow "$WORKFLOW_FILE" \
+    --branch "$BRANCH" \
# ---------------------------------------------------------------------------
if [ -n "${MERGE_BASELINE_PR:-}" ]; then
  echo "Merging baseline PR #$MERGE_BASELINE_PR …"
  gh pr merge "$MERGE_BASELINE_PR" --repo "$REPO" --squash --auto || true

Copilot AI Mar 12, 2026


When MERGE_BASELINE_PR is set, merge failures are ignored (|| true), and gh pr merge --auto can return before the PR is actually merged. That means subsequent slices may run against an unexpected code baseline. Consider failing fast if the merge command fails, and/or waiting until the PR is fully merged (or explicitly merged without --auto) before dispatching slices.

Suggested change
-  gh pr merge "$MERGE_BASELINE_PR" --repo "$REPO" --squash --auto || true
+  gh pr merge "$MERGE_BASELINE_PR" --repo "$REPO" --squash

@Steake

Steake commented Mar 13, 2026

@copilot apply changes based on the comments in this thread

- run_repo_architect_slices.sh: remove unused python3 dep check, add
  --branch filter to wait_for_new_run_id and BEFORE_RUN_ID snapshot,
  fail hard on artifact download failure and missing latest_analysis.json,
  hard-fail on mode mismatch, remove --auto/||true from MERGE_BASELINE_PR
- bootstrap_repo_architect_slices.sh: replace embedded heredocs with cp
  from tracked canonical paths (eliminates drift risk)
- repo_architect.py: add lane/targets to Config; read REPO_ARCHITECT_LANE
  and REPO_ARCHITECT_TARGETS env vars in build_config(); add
  _apply_lane_filter() to scope analysis to declared targets; route
  build_patch_plan() by lane prefix so lane/targets are semantically real

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
Author

Copilot AI commented Mar 13, 2026

@copilot apply changes based on the comments in this thread

All six review comments addressed in commit 9a43d7c. Here's the full breakdown:

scripts/run_repo_architect_slices.sh

  • python3 dep check — removed; it was never called in the script (all JSON parsing is done with jq).
  • MERGE_BASELINE_PR merge — removed --auto and || true; any merge failure now propagates immediately as a hard error so subsequent slices don't run against an unexpected baseline.
  • wait_for_new_run_id race condition — both the pre-dispatch snapshot and the polling loop now pass --branch "$BRANCH" to gh run list, so runs on other branches (e.g. scheduled triggers on main) can't be picked up by mistake.
  • Artifact failure handling — || true removed from the download; a missing latest_analysis.json now calls return 1 instead of printing a WARNING; a mode mismatch in the analysis JSON is now a hard error (return 1) instead of a warning.

scripts/bootstrap_repo_architect_slices.sh

  • Replaced the embedded heredocs with a simple validator/chmod +x that operates on the already-tracked files. The workflow YAML and runner script have a single canonical location in the repo — the bootstrap now just confirms they're present and marks the runner executable, so drift is structurally impossible.

repo_architect.py

  • Added lane: Optional[str] and targets: List[str] to Config.
  • build_config() reads REPO_ARCHITECT_LANE and REPO_ARCHITECT_TARGETS from env (targets are comma-split).
  • New _apply_lane_filter() scopes parse_error_files, cycles, and python_files in the analysis to the declared targets before plan building.
  • build_patch_plan() now routes by lane prefix: report* → report plan, parse_repair*/import_cycle* → scoped report plan, no lane → existing default cascade. The env vars are no longer silently ignored.
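
The env-var plumbing described in that commit could look roughly like this. This is a hypothetical Python sketch: the Config fields and build_config behavior mirror the comment's description, not necessarily the actual repo_architect.py code.

```python
import os
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Config:
    # Fields described in the commit message; other Config fields omitted.
    lane: Optional[str] = None
    targets: List[str] = field(default_factory=list)

def build_config() -> Config:
    # Read lane/targets from the env vars the workflow exports;
    # targets are comma-split, with empty entries and whitespace dropped.
    lane = os.environ.get("REPO_ARCHITECT_LANE") or None
    raw_targets = os.environ.get("REPO_ARCHITECT_TARGETS", "")
    targets = [t.strip() for t in raw_targets.split(",") if t.strip()]
    return Config(lane=lane, targets=targets)
```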

@Steake

Steake commented Mar 13, 2026

@copilot Rework PR #7 on top of current main after PR #8 is merged. Keep PR #8’s hardening intact. Then fix the two orchestration defects called out in review:

  1. wait_for_new_run_id must not pick unrelated workflow runs; filter by the intended branch/ref and bind run selection to the current dispatch sequence.
  2. MERGE_BASELINE_PR must not use gh pr merge --auto || true; do an explicit squash merge, fail fast on merge failure, and wait until the baseline PR is actually merged before continuing.
Do not broaden scope. Do not regress any behavior from PR #8 (feat: complete repo_architect methodology). Push the fixes to PR #7 (feat: fat-slices repo-architect execution path via GitHub Actions).

@Steake

Steake commented Mar 13, 2026

@copilot Rework PR #7 on top of current main after PR #8 is merged. Keep PR #8’s hardening intact. Then fix the two orchestration defects called out in review:

  1. wait_for_new_run_id must not pick unrelated workflow runs; filter by the intended branch/ref and bind run selection to the current dispatch sequence.
  2. MERGE_BASELINE_PR must not use gh pr merge --auto || true; do an explicit squash merge, fail fast on merge failure, and wait until the baseline PR is actually merged before continuing.
Do not broaden scope. Do not regress any behavior from PR #8 (feat: complete repo_architect methodology). Push the fixes to PR #7 (feat: fat-slices repo-architect execution path via GitHub Actions).

…x both orchestration defects

- Restore repo_architect.py and repo-architect.yml to exact main state
  (PR #8 hardening fully intact; PR #7's superseded lane/targets additions removed)
- Rewrite run_repo_architect_slices.sh for PR #8 workflow interface:
  dispatch with mode + lanes (not lane/targets/allow_dirty which no longer exist)
- Slice sequence: report → parse_errors → import_cycles →
  entrypoint_consolidation → hygiene → campaign
- Fix wait_for_new_run_id: filter by --event workflow_dispatch + --branch,
  record DISPATCH_TIME before dispatch, jq-filter createdAt >= dispatch_time
  so pre-existing and cross-branch runs are excluded; pipe to jq separately
  so --arg is correctly resolved (gh -q does not forward jq flags)
- Fix MERGE_BASELINE_PR: explicit squash merge, fail fast on merge error,
  poll gh pr view until state=MERGED before dispatching any slices

Co-authored-by: Steake <530040+Steake@users.noreply.github.com>
Author

Copilot AI commented Mar 13, 2026

@copilot Rework PR #7 on top of current main after PR #8 is merged. Keep PR #8’s hardening intact. Then fix the two orchestration defects called out in review:

  1. wait_for_new_run_id must not pick ...

Done in commit ce85436. Here's what was changed:

Rebased on main (PR #8 intact): repo_architect.py and .github/workflows/repo-architect.yml are now byte-for-byte identical to main — all 57 PR #8 tests pass.

run_repo_architect_slices.sh — new workflow interface: The workflow no longer accepts lane/targets/allow_dirty inputs. The script now dispatches with mode + lanes (PR #8's interface). Slice sequence updated:

  • report → mode=report
  • parse_errors → mode=mutate lanes=parse_errors
  • import_cycles → mode=mutate lanes=import_cycles
  • entrypoint_consolidation → mode=mutate lanes=entrypoint_consolidation
  • hygiene → mode=mutate lanes=hygiene
  • campaign → mode=campaign

Fix 1 — wait_for_new_run_id bound to current dispatch sequence:

  • Captures DISPATCH_TIME (UTC ISO-8601) immediately before gh workflow run
  • Filters gh run list by --event workflow_dispatch --branch "$BRANCH" — scheduled runs and runs on other branches are excluded
  • Pipes JSON to jq --arg ts "$DISPATCH_TIME" 'select(.createdAt >= $ts)' — runs that were already in flight before our dispatch are excluded
  • Both BEFORE_RUN_ID snapshot and the poll loop apply the same event+branch+timestamp filters

Fix 2 — MERGE_BASELINE_PR synchronous merge:

  • Explicit gh pr merge --squash with no --auto, no || true — fails immediately on error
  • Polls gh pr view --json state until state == MERGED (5 s interval, 5 min timeout) before dispatching any slices
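
The merge-confirmation loop can be sketched like this. It is a control-flow sketch only: gh_pr_state is a stub standing in for the real gh pr view --json state -q .state call, and the call counter is invented so the sketch runs without the GitHub CLI.

```shell
calls=0
gh_pr_state() {
  # Stub: the PR reads as OPEN on the first poll and MERGED afterwards.
  calls=$((calls + 1))
  if [ "$calls" -ge 2 ]; then state="MERGED"; else state="OPEN"; fi
}

pr=7
deadline=$((SECONDS + 300))     # 5-minute timeout, as in the PR description
while :; do
  gh_pr_state                   # real script: state="$(gh pr view "$pr" --json state -q .state)"
  if [ "$state" = "MERGED" ]; then
    break
  fi
  if [ "$SECONDS" -ge "$deadline" ]; then
    echo "ERROR: PR #$pr not merged within timeout" >&2
    exit 1
  fi
  sleep 0                       # real script: sleep 5 between polls
done
echo "baseline PR #$pr confirmed merged"
```

Dispatching only after this loop exits is what guarantees every slice runs against the fully merged baseline.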

@copilot Rework PR #7 on top of current main after PR #8 is merged. Keep PR #8’s hardening intact. Then fix the two orchestration defects called out in review:

  1. wait_for...

Same fix as above (ce85436) — both orchestration defects addressed with no regression to PR #8's hardening.

Warning

Firewall rules blocked me from connecting to one or more addresses.

I tried to connect to the following addresses, but was blocked by firewall rules:

  • models.github.ai
    • Triggering command: /usr/bin/python python -m pytest /tmp/test_repo_architect.py -v --noconftest (dns block)

If you need me to access, download, or install something from one of these locations, you can either:

@Steake Steake merged commit 7c09c90 into main Mar 13, 2026
2 checks passed